AI Literacy


Calibrated Trust in Dealing with LLM Hallucinations: A Qualitative Study

Ryser, Adrian, Allwein, Florian, Schlippe, Tim

arXiv.org Artificial Intelligence

Hallucinations are outputs by Large Language Models (LLMs) that are factually incorrect yet appear plausible [1]. This paper investigates how such hallucinations influence users' trust in LLMs and their interaction with LLMs. To explore this in everyday use, we conducted a qualitative study with 192 participants. Our findings show that hallucinations do not result in blanket mistrust but instead lead to context-sensitive trust calibration. Building on the calibrated trust model by Lee & See [2] and Afroogh et al.'s trust-related factors [3], we confirm expectancy [3], [4], prior experience [3], [4], [5], and user expertise & domain knowledge [3], [4] as user-related (human) trust factors, and identify intuition as an additional factor relevant for hallucination detection. Additionally, we found that trust dynamics are further influenced by contextual factors, particularly perceived risk [3] and decision stakes [6]. Consequently, we validate the recursive trust calibration process proposed by Blöbaum [7] and extend it by including intuition as a user-related trust factor. Based on these insights, we propose practical recommendations for responsible and reflective LLM use.


Artificial Intelligence Competence of K-12 Students Shapes Their AI Risk Perception: A Co-occurrence Network Analysis

Heilala, Ville, Sikström, Pieta, Setälä, Mika, Kärkkäinen, Tommi

arXiv.org Artificial Intelligence

As artificial intelligence (AI) becomes increasingly integrated into education, understanding how students perceive its risks is essential for supporting responsible and effective adoption. This research aimed to examine the relationships between perceived AI competence and risks among Finnish K-12 upper secondary students (n = 163) by utilizing a co-occurrence analysis. Students reported their self-perceived AI competence and concerns related to AI across systemic, institutional, and personal domains. The findings showed that students with lower competence emphasized personal and learning-related risks, such as reduced creativity, lack of critical thinking, and misuse, whereas higher-competence students focused more on systemic and institutional risks, including bias, inaccuracy, and cheating. These differences suggest that students' self-reported AI competence is related to how they evaluate both the risks and opportunities associated with artificial intelligence in education (AIED). The results of this study highlight the need for educational institutions to incorporate AI literacy into their curricula, provide teacher guidance, and inform policy development to ensure personalized opportunities for utilization and equitable integration of AI into K-12 education.


Beyond Awareness: Investigating How AI and Psychological Factors Shape Human Self-Confidence Calibration

Cau, Federico Maria, Spano, Lucio Davide

arXiv.org Artificial Intelligence

Human-AI collaboration outcomes depend strongly on human self-confidence calibration, which drives reliance on or resistance toward AI's suggestions. This work presents two studies examining whether calibrating self-confidence before decision tasks, together with low versus high levels of Need for Cognition (NFC) and Actively Open-Minded Thinking (AOT), leads to differences in decision accuracy, self-confidence appropriateness during the tasks, and metacognitive perceptions (global and affective). The first study presents strategies to identify well-calibrated users, also comparing decision accuracy and the appropriateness of self-confidence across NFC and AOT levels. The second study investigates the effects of calibrated self-confidence in AI-assisted decision-making (no AI, two-stage AI, and personalized AI), also considering different NFC and AOT levels. Our results show the importance of human self-confidence calibration and psychological traits when designing AI-assisted decision systems. We further propose design recommendations to address the challenge of calibrating self-confidence and supporting tailored, user-centric AI that accounts for individual traits.


AI Literacy Assessment Revisited: A Task-Oriented Approach Aligned with Real-world Occupations

Bogart, Christopher, Warrier, Aparna, Agarwal, Arav, Higashi, Ross, Zhang, Yufan, Flot, Jesse, Savelka, Jaromir, Burte, Heather, Sakr, Majd

arXiv.org Artificial Intelligence

As artificial intelligence (AI) systems become ubiquitous in professional contexts, there is an urgent need to equip workers, often with backgrounds outside of STEM, with the skills to use these tools effectively as well as responsibly, that is, to be AI literate. However, prevailing definitions, and therefore assessments, of AI literacy often emphasize foundational technical knowledge, such as programming, mathematics, and statistics, over practical knowledge such as interpreting model outputs, selecting tools, or identifying ethical concerns. This leaves a noticeable gap in assessing someone's AI literacy for real-world job use. We propose a work-task-oriented assessment model for AI literacy which is grounded in the competencies required for effective use of AI tools in professional settings. We describe the development of a novel AI literacy assessment instrument, and accompanying formative assessments, in the context of a US Navy robotics training program. The program included training in robotics and AI literacy, as well as a competition with practical tasks and a multiple-choice scenario task meant to simulate use of AI in a job setting. We found that, as a measure of applied AI literacy, the competition's scenario task outperformed the tests we adopted from past research or developed ourselves. We argue that when training people for AI-related work, educators should consider evaluating them with instruments that emphasize highly contextualized practical skills rather than abstract technical knowledge, especially when preparing workers without technical backgrounds for AI-integrated roles.


"I Like That You Have to Poke Around": Instructors on How Experiential Approaches to AI Literacy Spark Inquiry and Critical Thinking

Warrier, Aparna Maya, Agarwal, Arav, Savelka, Jaromir, Bogart, Christopher, Burte, Heather

arXiv.org Artificial Intelligence

As artificial intelligence (AI) increasingly shapes decision-making across domains, there is a growing need to support AI literacy among learners beyond computer science. However, many current approaches rely on programming-heavy tools or abstract lecture-based content, limiting accessibility for non-STEM audiences. This paper presents findings from a study of AI User, a modular, web-based curriculum that teaches core AI concepts through interactive, no-code projects grounded in real-world scenarios. The curriculum includes eight projects; this study focuses on instructor feedback on Projects 5-8, which address applied topics such as natural language processing, computer vision, decision support, and responsible AI. Fifteen community college instructors participated in structured focus groups, completing the projects as learners and providing feedback through individual reflection and group discussion. Using thematic analysis, we examined how instructors evaluated the design, instructional value, and classroom applicability of these experiential activities. Findings highlight instructors' appreciation for exploratory tasks, role-based simulations, and real-world relevance, while also surfacing design trade-offs around cognitive load, guidance, and adaptability for diverse learners. This work extends prior research on AI literacy by centering instructor perspectives on teaching complex AI topics without code. It offers actionable insights for designing inclusive, experiential AI learning resources that scale across disciplines and learner backgrounds.


AI Literacy for Community Colleges: Instructors' Perspectives on Scenario-Based and Interactive Approaches to Teaching AI

Warrier, Aparna Maya, Agarwal, Arav, Savelka, Jaromir, Bogart, Christopher A, Burte, Heather

arXiv.org Artificial Intelligence

This research category full paper investigates how community college instructors evaluate interactive, no-code AI literacy resources designed for non-STEM learners. As artificial intelligence becomes increasingly integrated into everyday technologies, AI literacy - the ability to evaluate AI systems, communicate with them, and understand their broader impacts - has emerged as a critical skill across disciplines. Yet effective, scalable approaches for teaching these concepts in higher education remain limited, particularly for students outside STEM fields. To address this gap, we developed AI User, an interactive online curriculum that introduces core AI concepts through scenario-based activities set in real-world contexts. This study presents findings from four focus groups with instructors who engaged with AI User materials and participated in structured feedback activities. Thematic analysis revealed that instructors valued exploratory tasks that simulated real-world AI use cases and fostered experimentation, while also identifying challenges related to scaffolding, accessibility, and multi-modal support. A ranking task for instructional support materials showed a strong preference for interactive demonstrations over traditional educational materials like conceptual guides or lecture slides. These findings offer insights into instructor perspectives on making AI concepts more accessible and relevant for broad learner audiences. They also inform the design of AI literacy tools that align with diverse teaching contexts and support critical engagement with AI in higher education.


AI Literacy in UAE Libraries: Assessing Competencies, Training Needs, and Ethical Considerations for the Digital Age

Khan, Zafar Imam

arXiv.org Artificial Intelligence

This is the accepted manuscript version; the final published version will appear in College & Research Libraries, November 2026. Zafar Imam Khan, Learning Resources Manager, Hamdan Bin Mohammed Smart University, Dubai, United Arab Emirates. Email: zafarimamkhan@gmail.com, ORCID: https://orcid.org/0000-0003-2081-0951. Abstract: The study explores the current state of artificial intelligence (AI) literacy among library professionals, employing a quantitative approach consisting of 92 surveys of LIS professionals in the United Arab Emirates (UAE). Findings of the study revealed the presence of strong cognitive competencies, while gaps were observed in behavioral and normative competencies, especially related to AI biases, AI-powered learning, and ethical considerations. A disconnect was observed between the perceived importance of AI skills and the effectiveness of current training programs. Introduction: Generative AI has created massive disruption across sectors such as manufacturing, services, agriculture, medicine, and education, and has transformed a range of operations and services. Libraries are transforming and gearing up to harness the power of AI, which can enhance the efficiency, accessibility, and personalization of services, thereby reshaping the traditional library landscape. This transformation has been observed in several traditional library services: AI is automating routine tasks such as cataloguing and classification of collections and enhancing search functionality and information retrieval, creating a more accurate and organized library system while freeing librarians to focus on intellectually stimulating activities (Preethi, 2024).
There is a global race to integrate AI into library services, which has presented both opportunities and challenges in terms of AI literacy among library professionals. AI literacy involves an understanding of AI tools, their applications, and the ethical considerations surrounding their use.


AI & Data Competencies: Scaffolding holistic AI literacy in Higher Education

Kennedy, Kathleen, Gupta, Anuj

arXiv.org Artificial Intelligence

This chapter introduces the AI & Data Acumen Learning Outcomes Framework, a comprehensive tool designed to guide the integration of AI literacy across higher education. Developed through a collaborative process, the framework defines key AI and data-related competencies across four proficiency levels and seven knowledge dimensions. It provides a structured approach for educators to scaffold student learning in AI, balancing technical skills with ethical considerations and sociocultural awareness. The chapter outlines the framework's development process, its structure, and practical strategies for implementation in curriculum design, learning activities, and assessment. We address challenges in implementation and future directions for AI education. By offering a roadmap for developing students' holistic AI literacy, this framework prepares learners to leverage generative AI capabilities in both academic and professional contexts.


Why we should be skeptical of the hasty global push to test 15-year-olds' AI literacy in 2029

AIHub

Why we should be skeptical of the hasty global push to test 15-year-olds' AI literacy in 2029. If 2022 was the year OpenAI knocked our world off course with the launch of ChatGPT, 2025 will be remembered for the frenzied embrace of AI as the solution to everything. And, yes, this includes teaching and schoolwork. In today's breakneck AI innovation race, the Organization for Economic Co-operation and Development (OECD), along with the European Commission, has called for the development of unified AI literacy strategies in kindergarten to Grade 12 education. They have done this through an AI Literacy Framework developed with Code.org and a range of experts in computational thinking, neuroscience, AI, educational technology, and innovation, and with "valuable insights" from the "TeachAI community". The "TeachAI community" refers to a larger umbrella project providing web resources targeting teachers, education leaders, and "solution providers".


Defining a Strategic Action Plan for AI in Higher Education

Avouris, Nikolaos

arXiv.org Artificial Intelligence

We start by reviewing normative actions of international organizations and concerns expressed about the current technical landscape. We then propose a framework comprising five key dimensions relating to the main challenges of AI in higher education institutions, followed by five key strategic actions that the main stakeholders need to take in order to address current developments. We map these actions to the main stakeholders of higher education and propose a deployment plan. This defines a framework along the dimensions Challenges, Actions, Stakeholders, Deployment (CASD). Examples of AI-specific actions at the institutional and individual course level are also provided and discussed.